Stochastic (Approximate) Proximal Point Methods: Convergence, Optimality, and Adaptivity

Authors

Abstract


Similar references

Ergodic convergence of a stochastic proximal point algorithm

The purpose of this paper is to establish the almost sure weak ergodic convergence of a sequence of iterates $(x_n)$ given by $x_{n+1} = (I + \lambda_n A(\xi_{n+1}, \cdot))^{-1}(x_n)$, where $(A(s, \cdot) : s \in E)$ is a collection of maximal monotone operators on a separable Hilbert space, $(\xi_n)$ is an independent identically distributed sequence of random variables on $E$, and $(\lambda_n)$ is a positive sequence in $\ell^2 \setminus \ell^1$. The weighted average...
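
To make the iteration concrete, here is a minimal sketch (not the paper's code) of the resolvent update $x_{n+1} = (I + \lambda_n A(\xi_{n+1}, \cdot))^{-1}(x_n)$, specialized to a finite family of affine monotone operators $A(s, x) = M_s x + b_s$ on $\mathbb{R}^d$; the operator family, the step sizes $\lambda_n = n^{-0.6} \in \ell^2 \setminus \ell^1$, and the uniform sampling scheme are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, num_ops = 3, 5

# Random PSD matrices make each A(s, .) = M[s] @ x + b[s] maximal monotone.
Ms = [(lambda G: G @ G.T)(rng.standard_normal((d, d))) for _ in range(num_ops)]
bs = [rng.standard_normal(d) for _ in range(num_ops)]

x = np.zeros(d)
avg, weight_sum = np.zeros(d), 0.0
I = np.eye(d)

for n in range(1, 2001):
    lam = n ** -0.6              # positive sequence in l^2 \ l^1, as required
    s = rng.integers(num_ops)    # i.i.d. draw of xi_{n+1}
    # Resolvent of the affine operator: solve (I + lam*M) x_next = x - lam*b.
    x = np.linalg.solve(I + lam * Ms[s], x - lam * bs[s])
    # Weighted (ergodic) average, the quantity whose convergence is studied.
    avg = (weight_sum * avg + lam * x) / (weight_sum + lam)
    weight_sum += lam

print("ergodic average:", avg)
```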

Proximal Point Methods Revisited

Proximal point methods have been widely used over the last decades to approximate solutions of nonlinear equations associated with monotone operators. Inspired by the iterative procedure defined by B. Martinet (1970), R. T. Rockafellar introduced in 1976 the so-called proximal point algorithm (PPA) for a general maximal monotone operator. The sequence generated by this iterative method is ...
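
For intuition, a minimal sketch of the PPA iteration $x_{k+1} = (I + \lambda_k A)^{-1}(x_k)$ in the special case $A = \partial f$ with $f(x) = |x|$, where the resolvent is the soft-thresholding operator; the choice of $f$ and the constant step size are illustrative assumptions, not taken from the paper.

```python
def soft_threshold(x, lam):
    """Resolvent of A = subdifferential of |.|: the prox of lam*|.| at x."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

x = 5.0
for k in range(10):
    lam = 1.0                     # any step sequence bounded away from zero works
    x = soft_threshold(x, lam)    # x_{k+1} = (I + lam*A)^{-1}(x_k)
    print(f"k={k}: x={x:.4f}")    # converges to a zero of A (here x* = 0)
```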

Momentum and Stochastic Momentum for Stochastic Gradient, Newton, Proximal Point and Subspace Descent Methods

In this paper we study several classes of stochastic optimization algorithms enriched with heavy ball momentum. Among the methods studied are: stochastic gradient descent, stochastic Newton, stochastic proximal point, and stochastic dual subspace ascent. This is the first time momentum variants of several of these methods have been studied. We choose to perform our analysis in a setting in which all o...
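
As a concrete instance of the simplest method on this list, here is a rough sketch of stochastic gradient descent with heavy ball momentum on an illustrative least-squares problem; the data, step size, and momentum parameter are assumptions for illustration, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 5
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true                    # consistent linear system

x = np.zeros(d)
x_prev = x.copy()
step, beta = 0.01, 0.9            # step size and momentum parameter

for t in range(2000):
    i = rng.integers(n)           # sample one row: stochastic gradient
    grad = (A[i] @ x - b[i]) * A[i]
    # Heavy ball update: x_{t+1} = x_t - step*g_t + beta*(x_t - x_{t-1}).
    x, x_prev = x - step * grad + beta * (x - x_prev), x

print("distance to x*:", np.linalg.norm(x - x_true))
```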

Un-regularizing: approximate proximal point and faster stochastic algorithms for empirical risk minimization

We develop a family of accelerated stochastic algorithms that optimize sums of convex functions. Our algorithms improve upon the fastest running time for empirical risk minimization (ERM), and in particular linear least-squares regression, across a wide range of problem settings. To achieve this, we establish a framework, based on the classical proximal point algorithm, useful for accelerating ...
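
To make the outer/inner structure of such a framework concrete, here is a hedged sketch of the approximate-proximal-point idea: repeatedly and only approximately minimize the better-conditioned subproblem $f(x) + \frac{\kappa}{2}\|x - y_k\|^2$ with a few inner gradient steps. The problem instance, $\kappa$, and the inner-loop budget are illustrative assumptions, and this sketch omits the extrapolation step that accelerated schemes of this kind typically add.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 100, 10
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def grad_f(x):                      # gradient of a least-squares ERM objective
    return A.T @ (A @ x - b) / n

kappa, inner_steps, inner_lr = 1.0, 20, 0.05
y = np.zeros(d)

for k in range(50):
    x = y.copy()
    # Inner solver: approximately minimize f(x) + (kappa/2)*||x - y||^2.
    for _ in range(inner_steps):
        x -= inner_lr * (grad_f(x) + kappa * (x - y))
    y = x                           # outer proximal point step (no extrapolation)

print("final objective:", 0.5 * np.linalg.norm(A @ y - b) ** 2 / n)
```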

Approximate Newton Methods and Their Local Convergence

Many machine learning models are reformulated as optimization problems, so it is important to be able to solve large-scale optimization problems in big data applications. Recently, stochastic second-order methods have attracted much attention for optimization due to their efficiency at each iteration, rectifying a weakness of the ordinary Newton method, namely its high cost at each iteration...
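
As one representative of this class, here is a minimal sketch of subsampled Newton for logistic regression, where the Hessian is built from a random subsample of the data to reduce the per-iteration cost; the problem instance, subsample size, and ridge term are illustrative assumptions, not the paper's method verbatim.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, m = 1000, 10, 100              # m = Hessian subsample size
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
y = np.where(A @ x_true + rng.standard_normal(n) >= 0, 1.0, -1.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

reg = 1e-2                           # small ridge term keeps the problem strongly convex
x = np.zeros(d)
for t in range(20):
    p = sigmoid(y * (A @ x))
    grad = -(A.T @ (y * (1.0 - p))) / n + reg * x        # full gradient
    idx = rng.choice(n, size=m, replace=False)           # random subsample
    w = p[idx] * (1.0 - p[idx])
    H = (A[idx].T * w) @ A[idx] / m + reg * np.eye(d)    # subsampled Hessian
    x -= np.linalg.solve(H, grad)                        # approximate Newton step

print("train loss:", np.mean(np.log1p(np.exp(-y * (A @ x)))) + 0.5 * reg * x @ x)
```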


Journal

Journal title: SIAM Journal on Optimization

Year: 2019

ISSN: 1052-6234, 1095-7189

DOI: 10.1137/18M1230323